
    Adaptive Radar Detection of Dim Moving Targets in Presence of Range Migration

    This paper addresses adaptive radar detection of dim moving targets. To circumvent range migration, the detection problem is formulated as a multiple hypothesis test and solved by applying model order selection rules, which allow the "position" of the target within the coherent processing interval (CPI) to be estimated and, eventually, the target to be detected. The performance analysis proves the effectiveness of the proposed approach, also in comparison to existing alternatives.
    Comment: 5 pages, 2 figures, submitted to IEEE Signal Processing Letters
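    As a loose illustration of the multiple-hypothesis idea (a toy sketch, not the detector derived in the paper), each candidate target position can be scored with an energy statistic plus a model-order penalty and the best hypothesis compared against noise-only; all numbers and the BIC-style penalty below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch (not the paper's detector): a CPI of N pulses whose target
# return may occupy one of K candidate range cells because of migration.
# Each candidate position gets a coherent-integration energy statistic;
# a BIC-style penalty charges for the extra position parameter, and the
# noise-only hypothesis H0 scores zero.
N, K = 16, 8
data = rng.standard_normal((K, N))          # noise in every cell
data[3] += 2.0                              # dim target injected in cell 3

def stat(x):
    # energy of the coherently integrated pulses, noise-normalized
    return x.sum()**2 / x.size

penalty = 0.5 * np.log(N)                   # BIC cost of one parameter
scores = np.array([stat(data[k]) for k in range(K)]) - penalty
best = int(np.argmax(scores))               # estimated target position
detected = scores[best] > 0.0               # compare against H0
```

    Coherent integration concentrates the target energy in the correct cell, so the model-order rule both localizes and detects it.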

    A filtering monotonization approach for DG discretizations of hyperbolic problems

    We introduce a filtering technique for Discontinuous Galerkin approximations of hyperbolic problems. Following an approach already proposed for the Hamilton-Jacobi equations by other authors, we aim at reducing the spurious oscillations that arise in the presence of discontinuities when high order spatial discretizations are employed. This goal is achieved using a filter function that keeps the high order scheme where the solution is regular and switches to a monotone low order approximation where it is not. The method has been implemented in the framework of the deal.II numerical library, whose mesh adaptation capabilities are also used to reduce the region in which the low order approximation is used. A number of numerical experiments demonstrate the potential of the proposed filtering technique.

    Theoretical performance analysis of the W-ABORT detector

    In a recent paper we introduced a modification of the adaptive beamformer orthogonal rejection test (ABORT) for adaptive detection of signals in unknown noise, by supposing under the null hypothesis the presence of signals orthogonal to the nominal steering vector in the whitened observation space. We refer to this new receiver as the whitened adaptive beamformer orthogonal rejection test (W-ABORT). Through Monte Carlo simulations, this new detector was shown to provide better rejection of mismatched (e.g., sidelobe) signals than existing ones, such as ABORT or the adaptive coherence estimator (ACE), but at the price of a certain loss in detection of matched (i.e., mainlobe) signals. The aim of this paper is to provide a theoretical validation of this fact. We consider both distributed and point-like targets. We provide a statistical characterization of the W-ABORT test statistic under the null hypothesis, and for matched and mismatched signals under the alternative hypothesis. For distributed targets, the probability of false alarm and the probability of detection can only be expressed in terms of multi-dimensional integrals and are thus very complicated to obtain; in contrast, for point-like targets, such probabilities can be easily calculated by numerical integration techniques. The theoretical expressions derived herein corroborate the simulation results obtained previously.

    An improved adaptive sidelobe blanker

    We propose a two-stage detector consisting of a subspace detector followed by the whitened adaptive beamformer orthogonal rejection test (W-ABORT). The performance analysis shows that it possesses the constant false alarm rate property with respect to the unknown covariance matrix of the noise and that it can guarantee a wider range of directivity values than previously proposed two-stage detectors. The probability of false alarm and the probability of detection (for both matched and mismatched signals) have been evaluated by means of numerical integration techniques.
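    The cascade logic itself is simple and can be sketched generically (an illustration only; the paper's stages are an adaptive subspace detector followed by W-ABORT, with covariance estimation from secondary data, whereas the statistics and thresholds below are assumed stand-ins in white noise):

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic two-stage cascade: the cell under test is declared a detection
# only if BOTH statistics exceed their thresholds.  This is how such
# cascades trade a little matched-signal detection power for rejection
# of mismatched (sidelobe) signals.
n = 16
s = np.ones(n) / np.sqrt(n)                 # nominal steering vector

def two_stage(x, eta1=2.0, eta2=0.5):       # assumed thresholds
    t1 = (s @ x)**2                         # stage 1: matched-filter energy
    t2 = (s @ x)**2 / (x @ x)               # stage 2: normalized, ACE-like
    return bool(t1 > eta1 and t2 > eta2)

m = np.tile([1.0, -1.0], n // 2) / np.sqrt(n)      # orthogonal to s
matched = 2.0 * s + 0.2 * rng.standard_normal(n)
mismatched = 2.0 * m + 0.2 * rng.standard_normal(n)
```

    The matched return passes both stages, while the strong sidelobe-like return is rejected by the normalized statistic even though it carries energy.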

    GLRT-Based Direction Detectors in Homogeneous Noise and Subspace Interference

    In this paper, we derive and assess decision schemes to discriminate, resorting to an array of sensors, between the H0 hypothesis that the data under test contain disturbance only (i.e., noise plus interference) and the H1 hypothesis that they also contain signal components along a direction which is a priori unknown but constrained to belong to a given subspace of the observables. The disturbance is modeled in terms of complex normal random vectors plus deterministic interference assumed to belong to a known subspace. We assume that a set of noise-only (secondary) data is available, which possesses the same statistical characterization as the noise in the cells under test. At the design stage, we resort to either the plain generalized likelihood ratio test (GLRT) or the two-step GLRT-based design procedure. The performance analysis, conducted on simulated data, shows that the one-step GLRT performs better than the detector relying on the two-step design procedure when the number of secondary data is comparable to the number of sensors; moreover, it outperforms a one-step GLRT-based subspace detector when the dimension of the signal subspace is sufficiently high.
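    To make the subspace-detection setting concrete, here is a much simpler special case (white noise of known level and no interference, unlike the paper's unknown-covariance problem): the matched subspace detector, whose null distribution fixes the false-alarm threshold exactly.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Matched subspace detector sketch: T(x) = ||P_H x||^2, the energy of the
# data projected onto the known signal subspace.  For unit-power complex
# circular white noise, T ~ chi-square(2r)/2 under H0, which sets the
# threshold for a nominal Pfa.
n, r = 32, 3                              # sensors, subspace dimension
H = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
Q, _ = np.linalg.qr(H)                    # orthonormal subspace basis

eta = chi2.ppf(0.99, df=2 * r) / 2        # nominal Pfa = 1e-2

# Monte Carlo check of the false-alarm rate under H0
trials = 20000
noise = (rng.standard_normal((trials, n))
         + 1j * rng.standard_normal((trials, n))) / np.sqrt(2)
T = np.sum(np.abs(noise @ Q.conj())**2, axis=1)   # ||P_H x||^2 per trial
pfa = float(np.mean(T > eta))                     # close to 0.01
```

    The adaptive detectors in the paper replace the known noise level with estimates from secondary data, which is what complicates the analysis.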

    Empirical Evidences on the Interconnectedness between Sampling and Asset Returns' Distributions

    The aim of this work is to test how returns are distributed across multiple asset classes, markets, and sampling frequencies. We examine returns of swaps, equity and bond indices, as well as their rescaling by volatility, over different horizons (from inception to Q2-2020). Contrary to some of the literature, we find that the realized distributions of logarithmic returns, whether or not scaled by their standard deviations, are skewed, and that they may be better fitted by t-skew distributions. Our finding holds across asset classes, maturities, and developed and developing markets. This may explain why models based on the dynamic conditional score (DCS) show superior performance when the underlying distribution belongs to the t-skew family. Finally, we show how the sampling and the distribution of returns are strictly connected. This is of great importance: for example, extrapolating yearly scenarios from daily performances may prove incorrect.
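    The model-comparison step can be illustrated on simulated data (a sketch, not the paper's dataset; scipy ships no skew-t, so the symmetric Student's t stands in for the t-skew family here, and all parameters are assumed):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated daily log-returns with heavy tails (Student's t, df=4,
# ~1% scale).  Fitting a normal and a t by maximum likelihood and
# comparing AIC mirrors the paper's point that Gaussian assumptions
# miss the realized shape of returns.
ret = 0.01 * stats.t.rvs(df=4, size=5000, random_state=rng)

def aic(loglik, k):
    return 2 * k - 2 * loglik

mu, sd = stats.norm.fit(ret)
aic_norm = aic(stats.norm.logpdf(ret, mu, sd).sum(), 2)

nu, loc, sc = stats.t.fit(ret)
aic_t = aic(stats.t.logpdf(ret, nu, loc, sc).sum(), 3)
# heavy-tailed data: the t fit wins (lower AIC)
```

    On real returns the paper's t-skew family adds an asymmetry parameter on top of the heavy tails captured here.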

    Interest rates forecasting: between Hull and White and the CIR#. How to make a single factor model work

    In this work we present our findings on the so-called CIR#, which is a modified version of the Cox, Ingersoll & Ross (CIR) model, turned into a forecasting tool for any term structure. The main feature of the CIR# model is its ability to cope with negative interest rates, volatility clustering, and jumps. By considering a dataset composed of money market interest rates during turmoil and calmer periods, we show how the CIR# performs in terms of directionality of rates and forecasting error. Comparison is carried out with a revamped version of the CIR model (denoted CIRadj), the Hull and White model, and the EWMA, which is often adopted whenever no structure in the data is assumed. Testing and validation are performed on both historical and ad hoc data with different metrics and clustering criteria to confirm the analysis.
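    For reference, the classical CIR dynamics that the CIR# builds on can be simulated in a few lines (parameter values below are illustrative, not calibrated; the CIR# modifications for negative rates, volatility clustering, and jumps are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Classical CIR short-rate dynamics,
#   dr_t = kappa * (theta - r_t) dt + sigma * sqrt(r_t) dW_t,
# simulated with a full-truncation Euler scheme.
kappa, theta, sigma = 1.5, 0.03, 0.10     # assumed, not calibrated
r0, T, n_steps, n_paths = 0.01, 1.0, 250, 2000
dt = T / n_steps

r = np.full(n_paths, r0)
for _ in range(n_steps):
    rp = np.maximum(r, 0.0)               # full truncation keeps sqrt real
    r = (r + kappa * (theta - rp) * dt
           + sigma * np.sqrt(rp * dt) * rng.standard_normal(n_paths))

# mean reversion pulls the rate toward theta:
# E[r_T] = theta + (r0 - theta) * exp(-kappa * T) ~= 0.0255
mean_rT = float(r.mean())
```

    The standard model keeps rates nonnegative by construction, which is exactly the restriction the CIR# relaxes to handle negative-rate regimes.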